Deep learning has shown significant improvements over traditional machine learning methods in different domains, such as image and speech recognition. Their success on benchmark datasets has been transferred to the real world by practitioners through validated models. Pretraining vision models with supervised learning requires a large amount of expensive data annotation. To address this limitation, DeepCluster, a simple and scalable method for pretraining visual representations, has been proposed. However, the underlying workings of the model remain unclear. In this paper, we analyze DeepCluster internals and exhaustively evaluate the impact of various hyperparameters on three different datasets. Accordingly, we propose an explanation of why the algorithm works in practice. We also show that DeepCluster's convergence and performance highly depend on the interplay between the quality of the randomly initialized filters of the convolutional layers and the selected number of clusters. Furthermore, we demonstrate that continuous clustering is not critical for DeepCluster convergence. Therefore, early stopping of the clustering phase reduces training time and allows the algorithm to scale to large datasets. Finally, we derive plausible hyperparameter selection criteria in a semi-supervised setting.
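For readers unfamiliar with the algorithm being analyzed, here is a minimal sketch of one DeepCluster-style training epoch, assuming a generic PyTorch backbone and scikit-learn's k-means; the function name, hyperparameters, and simplified data pipeline are illustrative and not the reference implementation.

```python
# Minimal DeepCluster-style epoch (illustrative sketch, not the reference implementation).
# Assumes `dataset` yields (image_tensor, index) pairs and `backbone` maps images to features.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from sklearn.cluster import KMeans

def deepcluster_epoch(backbone, feat_dim, dataset, n_clusters=1000, device="cpu"):
    # 1) Extract features for the whole dataset with the current (initially random) filters.
    backbone.eval()
    feats = []
    with torch.no_grad():
        for images, _ in DataLoader(dataset, batch_size=256, shuffle=False):
            feats.append(backbone(images.to(device)).flatten(1).cpu())
    feats = torch.cat(feats).numpy()

    # 2) Cluster the features; the cluster assignments become this epoch's pseudo-labels.
    pseudo_labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats)

    # 3) Train the backbone plus a freshly re-initialized classifier on the pseudo-labels.
    classifier = nn.Linear(feat_dim, n_clusters).to(device)
    params = list(backbone.parameters()) + list(classifier.parameters())
    optimizer = torch.optim.SGD(params, lr=0.05, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    backbone.train()
    for images, idx in DataLoader(dataset, batch_size=256, shuffle=True):
        targets = torch.as_tensor(pseudo_labels[idx.numpy()], dtype=torch.long, device=device)
        logits = classifier(backbone(images.to(device)).flatten(1))
        optimizer.zero_grad()
        loss_fn(logits, targets).backward()
        optimizer.step()
    return backbone
```

With early stopping of the clustering phase, as discussed above, step 2 would simply be skipped after the first few epochs and the last set of pseudo-labels reused, which is where the reduction in training time comes from.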
Since the early machine learning models, metrics such as accuracy and precision have been the de facto way to evaluate and compare trained models. However, a single metric number does not fully capture the similarities and differences between models, especially in the computer vision domain. A model with high accuracy on one dataset may provide lower accuracy on another, without offering any further insight. To address this issue, we build on a recent interpretability technique called Disect to introduce \textit{model interpretability}, which determines how models relate to or complement each other based on the visual concepts they have learned (such as objects and materials). Towards this goal, we project 13 top-performing self-supervised models into a learned concepts embedding (LCE) space, which reveals the proximities among models from the perspective of learned concepts. We further compare the performance of these models across four computer vision tasks and 15 datasets. This experiment allows us to categorize the models into three groups and, for the first time, reveal the types of visual concepts required for different tasks. This is a step towards designing cross-task learning algorithms.
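A very rough, hypothetical illustration of this kind of analysis: each model is summarized by a vector of concept frequencies (how often its units are matched to object or material concepts by a dissection-style probe), and proximity between models is then measured in that space. The model names, concept list, and counts below are placeholders, not the paper's data or its exact method.

```python
# Sketch: compare models through the visual concepts attributed to their units.
# Concept-count vectors are placeholders; in practice they would come from a
# network-dissection style analysis of each trained model.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_similarity

concepts = ["dog", "car", "sky", "wood", "metal", "fabric"]   # object/material concepts
concept_counts = {                                            # hypothetical per-model counts
    "model_A": [12, 3, 8, 1, 0, 2],
    "model_B": [10, 5, 7, 0, 1, 3],
    "model_C": [1, 0, 2, 9, 11, 7],
}

names = list(concept_counts)
X = np.array([concept_counts[n] for n in names], dtype=float)
X /= X.sum(axis=1, keepdims=True)                             # normalize to concept distributions

proximity = cosine_similarity(X)                              # pairwise proximity in concept space
groups = AgglomerativeClustering(n_clusters=2).fit_predict(X) # group models by what they learned

for name, group in zip(names, groups):
    print(name, "-> group", group)
print(np.round(proximity, 2))
```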
The exponential growth of data generated on the Internet in the current information age is the driving force of the digital economy. Extracting information is the primary source of value in this accumulated big data. Big data's reliance on statistical analysis and hand-engineered, rule-based machine learning algorithms is overwhelmed by the sheer complexity inherent in human languages. Natural language processing (NLP) is equipping machines to understand these diverse and complex human languages. Text classification is an NLP task that automatically identifies patterns based on predefined or undefined labeled sets. Common text classification applications include information retrieval, news topic modeling, theme extraction, sentiment analysis, and spam detection. In text, some sequences of words depend on the previous or next word sequences to convey their full meaning. This is a challenging dependency task, requiring the machine to be able to store some of the important earlier information so that it can influence later meaning. Sequence models such as RNNs, GRUs, and LSTMs have been a breakthrough for such long-term dependency tasks. We therefore apply these models to binary and multi-class classification. The results obtained were excellent, with most of the models performing in the range of 80% to 94%. However, this result is not exhaustive, as we believe it can still be improved if machines are to compete with humans.
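As a minimal illustration of the sequence models compared above, the sketch below defines a small LSTM text classifier in Keras; the vocabulary size, sequence length, class count, and other hyperparameters are placeholder values, and swapping the LSTM layer for GRU or SimpleRNN yields the other architectures mentioned.

```python
# Minimal LSTM text classifier (hyperparameters are illustrative placeholders).
from tensorflow.keras import layers, models

VOCAB_SIZE = 20_000   # assumed vocabulary size
MAX_LEN = 200         # assumed length of padded integer token sequences
NUM_CLASSES = 4       # multi-class; for binary, use one sigmoid unit and binary cross-entropy

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, 128),
    layers.LSTM(64),                       # stores earlier context that shapes later meaning
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_split=0.1, epochs=5)  # x_train: padded id sequences
```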
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely oversee the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
This paper presents our solutions for the MediaEval 2022 task on DisasterMM. The task is composed of two subtasks, namely (i) Relevance Classification of Twitter Posts (RCTP), and (ii) Location Extraction from Twitter Texts (LETT). The RCTP subtask aims at differentiating flood-related and non-relevant social posts, while LETT is a Named Entity Recognition (NER) task that aims at extracting location information from the text. For RCTP, we proposed four different solutions based on BERT, RoBERTa, DistilBERT, and ALBERT, obtaining F1-scores of 0.7934, 0.7970, 0.7613, and 0.7924, respectively. For LETT, we used three models, namely BERT, RoBERTa, and DistilBERT, obtaining F1-scores of 0.6256, 0.6744, and 0.6723, respectively.
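A condensed sketch of the kind of transformer fine-tuning used for the RCTP subtask, written against the Hugging Face transformers Trainer API; the example tweets, labels, hyperparameters, and output path are placeholders rather than the exact setup behind the scores above.

```python
# Sketch: fine-tune BERT for binary relevance classification of tweets (RCTP-style).
# Example tweets, labels, hyperparameters, and paths are illustrative placeholders.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "bert-base-uncased"   # RoBERTa, DistilBERT, or ALBERT checkpoints can be swapped in
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

train = Dataset.from_dict({
    "text": ["river overflowing near the bridge", "great match last night"],
    "label": [1, 0],               # 1 = flood-related, 0 = non-relevant
})
train = train.map(lambda batch: tokenizer(batch["text"], truncation=True,
                                          padding="max_length", max_length=128),
                  batched=True)

args = TrainingArguments(output_dir="rctp-bert", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train).train()
```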
In recent years, social media has been widely explored as a potential source of communication and information in disasters and emergency situations. Several interesting works and case studies of disaster analytics, exploring different aspects of natural disasters, have already been conducted. Along with the great potential, disaster analytics comes with several challenges, mainly due to the nature of social media content. In this paper, we explore one such challenge and propose a text classification framework to deal with noisy Twitter data. More specifically, we employed several transformers, both individually and in combination, to differentiate between relevant and non-relevant Twitter posts, achieving the highest F1-score of 0.87.
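One plausible way to combine several transformers, shown purely as a sketch, is late fusion: average the per-class probabilities of individually fine-tuned models and take the highest-scoring class. The checkpoint paths below are hypothetical, and the paper's actual combination scheme may differ.

```python
# Sketch: combine several fine-tuned transformers by averaging their class probabilities.
# Checkpoint paths are hypothetical placeholders for models already fine-tuned on tweets.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHECKPOINTS = ["./bert-flood", "./roberta-flood", "./distilbert-flood"]

def ensemble_predict(text: str) -> int:
    probs = []
    for ckpt in CHECKPOINTS:
        tokenizer = AutoTokenizer.from_pretrained(ckpt)
        model = AutoModelForSequenceClassification.from_pretrained(ckpt)
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
        with torch.no_grad():
            probs.append(torch.softmax(model(**inputs).logits, dim=-1))
    return int(torch.stack(probs).mean(dim=0).argmax(dim=-1))   # 1 = relevant, 0 = non-relevant

print(ensemble_predict("heavy flooding reported downtown, roads closed"))
```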
Osteoarthritis (OA) is the most prevalent chronic joint disease worldwide, with the knee accounting for more than 80% of commonly affected joints. Knee OA is not yet a curable disease, and it affects a large population of patients, making it costly to patients and healthcare systems. The etiology, diagnosis, and treatment of knee OA are complicated by the variability in its clinical and physical manifestations. Although knee OA carries a list of well-known terminology aiming to standardize the nomenclature of the diagnosis, prognosis, treatment, and clinical outcomes of the chronic joint disease, in practice there is a wide range of terminology associated with knee OA across different data sources, including but not limited to biomedical literature, clinical notes, healthcare literacy, and health-related social media. Among these data sources, the scientific articles published in the biomedical literature usually provide a principled pipeline for studying the disease. Rapid yet accurate text mining of the large-scale scientific literature may discover novel knowledge and terminology to better understand knee OA and to improve the quality of knee OA diagnosis, prevention, and treatment. The present work aims to utilize artificial neural network strategies to automatically extract vocabularies associated with knee OA. Our findings indicate the feasibility of developing word-embedding neural networks for autonomous keyword extraction and abstraction of knee OA.
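A small sketch of the word-embedding idea described above: train word vectors on tokenized abstracts with gensim and surface candidate knee-OA vocabulary via nearest-neighbor queries around a seed term. The three example sentences and the seed word are placeholders, not the study's corpus or pipeline.

```python
# Sketch: learn word embeddings from biomedical text, then surface vocabulary related
# to knee osteoarthritis with nearest-neighbor queries. The tiny corpus is a placeholder.
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

abstracts = [
    "knee osteoarthritis is a chronic degenerative joint disease causing pain and stiffness",
    "radiographic severity of knee osteoarthritis is graded with the kellgren lawrence scale",
    "total knee arthroplasty is considered for end stage osteoarthritis of the knee",
]
sentences = [simple_preprocess(text) for text in abstracts]

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1, epochs=50)

# Candidate knee-OA vocabulary = terms closest to a seed word in the embedding space.
for term, score in model.wv.most_similar("osteoarthritis", topn=5):
    print(f"{term}\t{score:.2f}")
```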
Neural models that do not rely on pre-training have excelled in the keyphrase generation task with large annotated datasets. Meanwhile, new approaches have incorporated pre-trained language models (PLMs) for their data efficiency. However, a systematic study of how the two types of approaches compare, and of how different design choices affect the performance of PLM-based models, has been lacking. To fill in this knowledge gap and facilitate a more informed use of PLMs for keyphrase extraction and keyphrase generation, we present an in-depth empirical study. Formulating keyphrase extraction as sequence labeling and keyphrase generation as sequence-to-sequence generation, we perform extensive experiments in three domains. After showing that PLMs have competitive high-resource performance and state-of-the-art low-resource performance, we investigate important design choices, including in-domain PLMs, PLMs with different pre-training objectives, using PLMs with a parameter budget, and different formulations for present keyphrases. Further results show that (1) in-domain BERT-like PLMs can be used to build strong and data-efficient keyphrase generation models; (2) with a fixed parameter budget, prioritizing model depth over width and allocating more layers in the encoder leads to better encoder-decoder models; and (3) introducing four in-domain PLMs, we achieve competitive performance in the news domain and state-of-the-art performance in the scientific domain.
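To make the two formulations concrete, the sketch below instantiates keyphrase extraction as token-level BIO labeling with a BERT-style encoder and keyphrase generation as sequence-to-sequence decoding with a BART-style model; the checkpoints, label scheme, and ";"-separated target format are assumptions for illustration, not the paper's exact configuration.

```python
# Sketch of the two formulations (checkpoints, BIO scheme, and ";" delimiter are assumptions).
from transformers import (AutoModelForSeq2SeqLM, AutoModelForTokenClassification,
                          AutoTokenizer)

document = "graph neural networks for traffic forecasting"

# (a) Keyphrase extraction as sequence labeling: tag each token B-KP / I-KP / O.
ext_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
extractor = AutoModelForTokenClassification.from_pretrained("bert-base-uncased", num_labels=3)
id2label = {0: "O", 1: "B-KP", 2: "I-KP"}
enc = ext_tokenizer(document, return_tensors="pt")
tag_ids = extractor(**enc).logits.argmax(-1)[0]          # meaningful only after fine-tuning
print([id2label[int(i)] for i in tag_ids])

# (b) Keyphrase generation as seq2seq: decode a delimiter-separated keyphrase string,
#     e.g. a fine-tuned target such as "graph neural networks; traffic forecasting".
gen_tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")
generator = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")
input_ids = gen_tokenizer(document, return_tensors="pt").input_ids
output_ids = generator.generate(input_ids, max_new_tokens=32)
print(gen_tokenizer.decode(output_ids[0], skip_special_tokens=True))
```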
Privacy policies provide individuals with information about their rights and how their personal information is handled. Natural language understanding (NLU) technologies can help individuals and practitioners better understand the privacy practices described in lengthy and complex documents. However, existing efforts that use NLU technologies are limited by processing the language in a way that is exclusive to a single task focused on certain privacy practices. To this end, we introduce the Privacy Policy Language Understanding Evaluation (PLUE) benchmark, a multi-task benchmark for evaluating privacy policy language understanding across various tasks. We also collect a large corpus of privacy policies to enable privacy-policy domain-specific language model pre-training. We demonstrate that domain-specific pre-training offers performance improvements across all tasks. We release the benchmark to encourage future research in this domain.
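A compact sketch of the domain-specific pre-training step described above: continued masked-language-model training on a corpus of privacy-policy text with the Hugging Face APIs. The base checkpoint, corpus file name, and hyperparameters are placeholders, not PLUE's actual setup.

```python
# Sketch: continue masked-language-model pre-training on privacy-policy text before
# fine-tuning on downstream tasks. Paths and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Hypothetical text file with one privacy-policy segment per line.
corpus = load_dataset("text", data_files={"train": "privacy_policies.txt"})["train"]
corpus = corpus.map(lambda batch: tokenizer(batch["text"], truncation=True, max_length=256),
                    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)
args = TrainingArguments(output_dir="privacy-mlm", num_train_epochs=1,
                         per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=corpus, data_collator=collator).train()
```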
While pre-trained language models (LMs) for code have achieved great success in code completion, they generate code conditioned only on the contents within the file, i.e., the in-file context, and ignore the rich semantics in other files within the same project, i.e., the cross-file context, a critical source of information that is especially useful in modern modular software development. This oversight constrains code language models' capacity for code completion, leading to unexpected behaviors such as generating hallucinated class member functions or function calls with unexpected arguments. In this work, we develop a cross-file context finder tool, CCFINDER, that effectively locates and retrieves the most relevant cross-file context. We propose CoCoMIC, a framework that incorporates cross-file context to learn the in-file and cross-file context jointly on top of pre-trained code LMs. CoCoMIC successfully improves the existing code LM with a 19.30% relative increase in exact match and a 15.41% relative increase in identifier matching for code completion when the cross-file context is provided.
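The snippet below is a toy illustration of the general idea of cross-file context retrieval, not CCFINDER itself: it collects the signatures of functions and classes defined in project files that the current file imports, so they can be prepended to the completion prompt. The flat project layout and helper names are assumptions.

```python
# Toy cross-file context retrieval (not the CCFINDER implementation): gather signatures
# of project-local definitions imported by the current file and use them as extra context.
import ast
from pathlib import Path

def project_imports(current_file: str) -> set:
    """Top-level module names imported by `current_file`."""
    modules = set()
    for node in ast.walk(ast.parse(Path(current_file).read_text())):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module:
            modules.add(node.module.split(".")[0])
    return modules

def cross_file_context(current_file: str, project_root: str) -> str:
    """Signatures of functions/classes defined in imported project files."""
    snippets = []
    for module in project_imports(current_file):
        path = Path(project_root) / f"{module}.py"   # assumes a flat, single-package layout
        if not path.exists():                        # skip stdlib and third-party imports
            continue
        source = path.read_text()
        for node in ast.parse(source).body:
            if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                header = ast.get_source_segment(source, node).splitlines()[0]
                snippets.append(f"# from {module}.py: {header}")
    return "\n".join(snippets)

# Usage: prompt = cross_file_context("app/views.py", "app") + "\n" + in_file_prefix
```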